Practical Quasi-Newton Methods for Training Deep Neural Networks

Neural Information Processing Systems

We consider the development of practical stochastic quasi-Newton, and in particular Kronecker-factored block-diagonal BFGS and L-BFGS methods, for training deep neural networks (DNNs). In DNN training, the number of variables and components of the gradient n is often of the order of tens of millions and the Hessian has n^2 elements. Consequently, computing and storing a full n × n BFGS approximation, or storing a modest number of (step, change in gradient) vector pairs for use in an L-BFGS implementation, is out of the question. In our proposed methods, we approximate the Hessian by a block-diagonal matrix and use the structure of the gradient and Hessian to further approximate these blocks, each of which corresponds to a layer, as the Kronecker product of two much smaller matrices. This is analogous to the approach in KFAC, which computes a Kronecker-factored block-diagonal approximation to the Fisher matrix in a stochastic natural gradient method. Because of the indefinite and highly variable nature of the Hessian in a DNN, we also propose a new damping approach to keep the BFGS and L-BFGS approximations bounded, both above and below. In tests on autoencoder feed-forward network models with either nine or thirteen layers applied to three datasets, our methods outperformed or performed comparably to KFAC and state-of-the-art first-order stochastic methods.
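
The computational saving comes from never forming the n × n matrix: each layer's curvature block is kept as a Kronecker product of two small factors, so applying its inverse to that layer's gradient reduces to two small linear solves. The sketch below is a minimal NumPy illustration of that factored preconditioning step, assuming symmetric factors A (input side) and G (output side); the function name, factor names, and damping values are illustrative, and in the paper the factor inverses are maintained by BFGS/L-BFGS updates rather than by direct solves.

```python
import numpy as np

def precondition_layer_grad(dW, A, G, lam_A=1e-3, lam_G=1e-3):
    """Apply the inverse of a damped Kronecker-factored curvature block
    H_l ~ A (x) G to a layer gradient, using the identity
    (A (x) G)^{-1} vec(dW) = vec(G^{-1} dW A^{-1}) for symmetric A, G.

    dW : (out, in)  gradient of the layer's weight matrix
    A  : (in, in)   small factor built from layer inputs
    G  : (out, out) small factor built from output-side information
    """
    A_d = A + lam_A * np.eye(A.shape[0])   # per-factor damping (illustrative values)
    G_d = G + lam_G * np.eye(G.shape[0])
    # Two small solves instead of one n x n solve:
    right = np.linalg.solve(A_d, dW.T).T   # = dW @ inv(A_d), shape (out, in)
    return np.linalg.solve(G_d, right)     # = inv(G_d) @ dW @ inv(A_d)
```

For a layer with, say, 1000 inputs and 500 outputs, the factors are 1000 × 1000 and 500 × 500 rather than a single 500000 × 500000 block, which is what makes the approach practical.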


Review for NeurIPS paper: Practical Quasi-Newton Methods for Training Deep Neural Networks

Neural Information Processing Systems

Weaknesses: There are some weaknesses of QN methods applied to deep neural networks that also somewhat limit the applicability of the proposed algorithm. First, compared to first-order methods there are additional hyperparameters, beyond the learning rate, that need to be tuned: the damping terms (two for this algorithm), the decay parameter for the moving averages used to stabilize the BFGS updates, and the memory parameter p for L-BFGS. The authors merged the two damping terms into a single hyperparameter by assuming a relation between them and performed a sensitivity analysis; however, a systematic way of tuning all of these hyperparameters for a new application remains somewhat challenging. Second, the generalization performance of deep networks trained via second-order methods may lag behind that of first-order methods such as small-batch SGD and Adam, especially without careful hyperparameter tuning. The authors chose not to include generalization results in the experiments, arguing that the focus of the paper is comparing optimization techniques.
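
To make the tuning burden concrete, a hypothetical hyperparameter set for a run of such a Kronecker-factored quasi-Newton optimizer might look like the following; the names, values, and the particular way the two damping terms are tied together are assumptions for exposition, not the authors' interface.

```python
# Illustrative (hypothetical) hyperparameters for a K-BFGS / K-LBFGS-style run.
hparams = {
    "lr": 0.1,        # learning rate (also tuned for first-order baselines)
    "damping": 1e-2,  # single combined damping; the two per-factor terms are
                      # tied to it, e.g. lam_A = lam_G = sqrt(damping)
    "beta": 0.9,      # decay of the moving averages that stabilize the
                      # (s, y) curvature pairs used in the BFGS updates
    "p": 10,          # memory size for the L-BFGS variant
}
lam_A = lam_G = hparams["damping"] ** 0.5
```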


Review for NeurIPS paper: Practical Quasi-Newton Methods for Training Deep Neural Networks

Neural Information Processing Systems

This paper develops a quasi-Newton optimization algorithm for training fully connected neural networks. The authors develop an L-BFGS-like approximation of the Hessian via a block Kronecker-product factorization. On the experimental side, they demonstrate the acceleration effect of the proposed methods on autoencoder feed-forward neural network models with either nine or thirteen layers applied to three datasets. The authors also provide some theoretical guarantees on convergence to stationarity under additional assumptions. All reviewers found the paper interesting.
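
For readers unfamiliar with the "L-BFGS-like" part: L-BFGS never stores a matrix, only a short history of p (step, gradient-change) pairs, from which the action of the inverse Hessian approximation is reconstructed. A textbook two-loop recursion is sketched below as background only; it is not the authors' Kronecker-factored variant, which applies such low-memory updates to the small per-layer factors and adds damping.

```python
import numpy as np

def lbfgs_two_loop(grad, s_list, y_list):
    """Textbook L-BFGS two-loop recursion: approximate H^{-1} @ grad from the
    stored (s, y) pairs (oldest first). Background sketch, not the paper's method."""
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    q = grad.copy()
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):  # newest first
        alpha = rho * (s @ q)
        alphas.append(alpha)
        q = q - alpha * y
    s_last, y_last = s_list[-1], y_list[-1]
    r = (s_last @ y_last) / (y_last @ y_last) * q  # initial scaling H0 = gamma * I
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * (y @ r)
        r = r + (alpha - beta) * s
    return r
```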

